Updated to use CoreDNS #491

Merged
merged 2 commits into canonical:master on Jun 12, 2019
Conversation

richardcase
Contributor

The 'dns' addon has been changed to use CoreDNS by default instead of kube-dns. This brings it in line with CoreDNS being the default from Kubernetes v1.13:
https://kubernetes.io/docs/tasks/administer-cluster/coredns/

Relates to: #271

The 'dns' addon has been changed to use CoreDNS by default instead
of kube-dns. This brings it in line with CoreDNS being the
recommended default with Kubernetes.

Relates to: canonical#271
@ktsakalozos
Member

@richardcase thank you for the PR.

Overall it looks great. However, we need to take care of the MicroK8s upgrade case. When MicroK8s upgrades, it brings together all the manifests we package. With this PR the KubeDNS yaml manifest will be replaced with the CoreDNS one. As a result, a microk8s.disable dns will do nothing about a KubeDNS deployment that is already running. Think about this case: a user has done a microk8s.enable dns, so they have the kubedns deployment active, and then the upgrade happens. With this PR there is no way to cleanly remove the old kubedns service.

The fix for the upgrade is pretty easy. Do not patch the dns.yaml manifest; instead, introduce a new coredns.yaml. In microk8s.enable dns https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/enable.dns.sh#L11 apply the coredns.yaml manifest, but in the microk8s.disable dns script https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/disable.dns.sh#L14 delete both the dns.yaml and the coredns.yaml.

Do you think you can give this a try?
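
A minimal sketch of that split, assuming the coredns.yaml filename proposed above; the ${SNAP}/actions manifest paths and plain kubectl invocation are stand-ins for whatever the real scripts use:

```bash
# Sketch only -- manifest paths and the bare kubectl call are assumptions.

# enable.dns.sh: apply the new CoreDNS manifest
kubectl apply -f "${SNAP}/actions/coredns.yaml"

# disable.dns.sh: remove whichever DNS manifest may be in place, so a
# cluster upgraded from an existing kube-dns deployment is also cleaned up
kubectl delete -f "${SNAP}/actions/dns.yaml" --ignore-not-found
kubectl delete -f "${SNAP}/actions/coredns.yaml" --ignore-not-found
```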

@richardcase
Contributor Author

richardcase commented Jun 3, 2019

@ktsakalozos - ah yes, very good point. I'll make that change and test the scenario. Thanks for the review.

@balchua
Collaborator

balchua commented Jun 3, 2019

Don't you think we should check whether kube-dns or coredns is already present when enabling the dns addon? It could be that a careless user (like me) might run it twice, causing havoc in the cluster. Wdyt?

@richardcase
Contributor Author

Don't you think we should check whether kube-dns or coredns is already present when enabling the dns addon? It could be that a careless user (like me) might run it twice, causing havoc in the cluster. Wdyt?

That's an interesting one @balchua. I think there are at least two scenarios we should consider.

Scenario 1 - Calling microk8s.enable multiple times for the same action (i.e. microk8s.enable dns)

On a fresh install of microk8s, running microk8s.enable dns twice only results in coredns being applied to the cluster the first time and not the second. This is because on the second run there are no changes to the yaml:

[screenshot: dns_twice - output of running microk8s.enable dns twice]

But it does restart the kubelet. Do you think that microk8s.enable should check the status of the addon and, if it's 'enabled', do nothing? This could apply generically to all addons.

Scenario 2 - Upgrading to coredns
If your cluster already has kube-dns because the dns addon is enabled, should the user have to disable/enable, or should microk8s.enable dns be smart enough to upgrade?
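
One possible guard along the lines of scenario 1 (purely illustrative; the deployment name, namespace and manifest path are assumptions, and this is not what the PR ended up doing):

```bash
# Hypothetical pre-check for enable.dns.sh: skip the apply (and the
# kubelet restart) when a CoreDNS deployment is already running.
if kubectl -n kube-system get deployment coredns >/dev/null 2>&1; then
  echo "DNS addon is already enabled, nothing to do"
  exit 0
fi
kubectl apply -f "${SNAP}/actions/coredns.yaml"
```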

@ktsakalozos
Member

It must be possible to run the same microk8s.enable command multiple times on the same addon. All scripts have to be idempotent, meaning that no matter what the initial state is, the end result should be the same. Think about a single yaml manifest: no matter how many times you apply it, the result is always the same. This property makes our life easier, as we do not expect the system we configure to have memory.
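
To illustrate the property: kubectl apply is declarative, so re-running the same enable step converges to the same state regardless of the starting point.

```bash
# Applying the same manifest repeatedly is safe: the first run creates
# the resources, later runs report them as "unchanged".
kubectl apply -f coredns.yaml
kubectl apply -f coredns.yaml   # no changes, no side effects
```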

@richardcase
Contributor Author

Thanks @ktsakalozos.

Regarding upgrading kube-dns to coredns on an existing cluster: are you OK with someone having to do:

  1. microk8s.disable dns
  2. microk8s.enable dns

@ktsakalozos
Member

Yes, the user will continue using kubedns until they do a microk8s.disable dns followed by microk8s.enable dns.

This transition to coredns will come with the 1.15 release in a couple of weeks from now.

After feedback from the first commit the following have been changed:
* CoreDNS k8s artefacts have been added to their own yaml and not
the existing dns.yaml
* The existing dns yaml stays the same
* When you enable the dns addon it will apply the new coredns.yaml
* When you disable the dns addon it will look for running pods and
decide whether to remove dns.yaml or coredns.yaml

When doing an upgrade where the dns addon has been previously enabled
it will show as disabled on `microk8s.status` and kube-dns will be
running. To upgrade the user will be required to do the following:
1. `microk8s.disable dns`
2. `microk8s.enable dns`

Relates to: canonical#271
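
A rough sketch of the disable-time detection this commit message describes; the label selector, the pod-name check and the manifest paths are assumptions rather than the exact code:

```bash
# disable.dns.sh (sketch): work out which DNS flavour is running in
# kube-system and delete the matching manifest.
if kubectl -n kube-system get pods -l k8s-app=kube-dns -o name | grep -q coredns; then
  kubectl delete -f "${SNAP}/actions/coredns.yaml"
else
  kubectl delete -f "${SNAP}/actions/dns.yaml"
fi
```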
@richardcase
Contributor Author

@ktsakalozos - I made the suggested changes.

Just to let you know, if you have a cluster with the dns addon enabled (so kube-dns) and you then upgrade the snap to the coredns version:

  1. microk8s.status will show the addon as disabled, but the kube-dns artifacts will still exist in the cluster.
  2. the user will be required to microk8s.disable dns and then microk8s.enable dns to upgrade to coredns.

If 1 isn't acceptable then I could introduce a 'legacy-dns' addon which would be enabled after the upgrade, with the 'dns' addon being disabled. Then to upgrade, the user would have to microk8s.disable legacy-dns and then microk8s.enable dns.

@ktsakalozos
Member

Thank you for the PR @richardcase. I am merging this so it will hit the edge channel within the day and reach stable along with v1.15 scheduled for early next week.

@ktsakalozos ktsakalozos merged commit 4a1d2a7 into canonical:master Jun 12, 2019